Advanced Lane Finding Project

The goals / steps of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify the binary image ("bird's-eye view").
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

First step:

The first step is to compute the camera calibration matrix (mtx) and distortion coefficients (dist) from a set of chessboard calibration images in ../camera_cal/. Calibration is performed with cv2.calibrateCamera(), which computes the transformation between 3D object points in the world and their 2D image points.


Second step:

The second step is to apply distortion correction to the raw images using cv2.undistort(). Distortion correction ensures that the geometric shape of objects is represented consistently, no matter where they appear in the image.

Third step:

The third step is to use color transforms and gradients to create a thresholded binary image in which lane-line pixels stand out.

Fourth step:

The fourth step is to apply a perspective transform to the image to obtain a bird's-eye view. A perspective transform warps an image so that we are effectively viewing the scene from a different angle or direction.


Fifth step - Detect lane pixels and fit to find the lane boundary

In this step, a histogram of the bottom half of the binary warped image is used to find the columns with high pixel counts; the peak on each side of the image midpoint marks the base of a lane line, from which the left and right lane-line pixels and boundaries are traced.

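The histogram search above can be sketched as follows (a minimal version that finds only the two base positions; the full step would follow these with a sliding-window search and a second-order polynomial fit):

```python
import numpy as np

def find_lane_bases(binary_warped):
    """Locate the left/right lane x-positions from a column histogram
    of the bottom half of the warped binary image."""
    h, w = binary_warped.shape
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    midpoint = w // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = midpoint + np.argmax(histogram[midpoint:])
    return leftx_base, rightx_base
```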

Sixth step

In this step, the identified lane pixels are used to determine the radius of curvature of the lane and the position of the vehicle relative to the lane center.

Radius of curvature: 482.68 m
Center offset: 0.04 m
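A sketch of these calculations; the pixel-to-meter conversions are assumptions (typical values for a 720-row bird's-eye view of a US lane), and the curvature uses the standard formula R = (1 + (2Ay + B)²)^(3/2) / |2A| for a fit x = Ay² + By + C:

```python
import numpy as np

YM_PER_PIX = 30 / 720   # assumed meters per pixel in y
XM_PER_PIX = 3.7 / 700  # assumed meters per pixel in x

def radius_of_curvature(ploty, fitx):
    """Radius of curvature (m) of one lane line at the bottom of the image."""
    # Refit in metric space, then evaluate the curvature formula.
    fit = np.polyfit(ploty * YM_PER_PIX, fitx * XM_PER_PIX, 2)
    y_eval = np.max(ploty) * YM_PER_PIX
    return ((1 + (2 * fit[0] * y_eval + fit[1]) ** 2) ** 1.5
            / np.abs(2 * fit[0]))

def center_offset(img_width, left_base, right_base):
    """Vehicle offset (m) from lane center, camera assumed at image center."""
    lane_center = (left_base + right_base) / 2
    return (img_width / 2 - lane_center) * XM_PER_PIX
```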

Seventh step

In this step, the detected lane boundaries are warped back onto the original image using the inverse perspective transform.

Eighth step

Finally, the lane boundaries are drawn on the output frames together with numerical estimates of the lane curvature and vehicle position.

[MoviePy] >>>> Building video project_videoOutput.mp4
[MoviePy] Writing video project_videoOutput.mp4
100%|█████████▉| 1260/1261 [06:15<00:00,  3.35it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: project_videoOutput.mp4 
